Device and method of encoding video data
Patent abstract:
A device may determine whether to enable or disable bi-directional optical flow (BIO) for a current coding unit (UC) (for example, a block and/or a sub-block). Prediction information for the UC may be identified and may include prediction signals associated with a first reference block and a second reference block (for example, or a first reference sub-block and a second reference sub-block). A prediction difference may be calculated and may be used to determine the similarity between the two prediction signals. The UC may be reconstructed based on that similarity. For example, the decision to reconstruct the UC with BIO enabled or with BIO disabled may be based on whether or not the prediction signals are similar. BIO may be enabled for the UC when the two prediction signals are determined to be dissimilar. For example, the UC may be reconstructed with BIO disabled when the two prediction signals are determined to be similar.
Publication number: BR112020000032A2
Application number: BR112020000032-9
Filing date: 2018-07-03
Publication date: 2020-07-14
Inventors: Xiaoyu Xiu; Yuwen He; Yan Ye
Applicant: Vid Scale, Inc.
IPC main class:
Patent description:
[001] This application claims the benefit of: US provisional patent application No. 62/528,296, filed on July 3, 2017; US provisional patent application No. 62/560,823, filed on September 20, 2017; US provisional patent application No. 62/564,598, filed on September 28, 2017; US provisional patent application No. 62/579,559, filed on October 31, 2017; and US provisional patent application No. 62/599,241, filed on December 15, 2017, the contents of which are incorporated herein by reference.
BACKGROUND
[002] Video coding systems are widely used to compress digital video signals, to reduce the storage and/or transmission bandwidth needed for such signals. There are several types of video coding systems, such as block-based, wavelet-based and object-based systems. Currently, hybrid block-based video coding systems are widely used and/or deployed. Examples of block-based video coding systems include international video coding standards such as MPEG-1/2/4 part 2, H.264/MPEG-4 part 10 AVC, VC-1 and the most recent video coding standard, called HEVC (High Efficiency Video Coding), which was developed by the JCT-VC (Joint Collaborative Team on Video Coding) of ITU-T/SG16/Q.6/VCEG and ISO/IEC/MPEG.
SUMMARY
[003] A device for encoding video data may be configured to determine whether to enable or disable bi-directional optical flow (BIO) for a current coding unit (UC) (for example, a block and/or a sub-block). The prediction information for the current coding unit may be identified and may include prediction signals associated with a first reference block and a second reference block.
[004] The prediction difference, which may be used to determine the similarity between two prediction signals, can be determined in several ways. For example, calculating the prediction difference may include calculating an average difference between the respective sample values of two reference blocks associated with the two prediction signals. The sample values may be interpolated from their respective reference blocks. As another example, calculating the prediction difference may include calculating an average motion vector difference between the respective motion vectors of two reference blocks associated with the two prediction signals. The motion vectors may be scaled based on the temporal distance between a reference picture and the current coding unit.
[005] The similarity between the two prediction signals may be compared against a threshold to determine whether the two prediction signals are similar or dissimilar.
[006] A device for encoding video data may be configured to group one or more sub-blocks into a sub-block group. For example, contiguous sub-blocks that have similar motion information may be grouped into a sub-block group. Sub-block groups may vary in shape and size, and may be formed based on the shape and/or size of the current coding unit. The sub-blocks may be grouped horizontally and/or vertically. A motion compensation operation (for example, a single motion compensation operation) may be performed on the sub-block group. BIO refinement may be performed for the sub-block group. For example, BIO refinement may be based on the sub-block gradient values of the sub-block group.
[007] BIO gradients may be derived such that SIMD (Single Instruction, Multiple Data) based acceleration can be used. In one or more techniques, the BIO gradient may be derived by applying interpolation filters and gradient filters, where horizontal filtering can be performed followed by vertical filtering.
In the BIO gradient derivation, rounding operations may be performed on the input values, which can be implemented by adding an offset followed by a right shift.
[008] Devices, processes and means are disclosed for skipping BIO operations at the (for example, regular) motion compensation (CM) stage (for example, at the block level) of a video encoder and/or decoder. In one or more techniques, the BIO operation can be (for example, partially or completely) disabled for one or more blocks/sub-blocks when one or more factors/conditions are satisfied. BIO may be disabled for block(s)/sub-block(s) that are coded in/by a bilateral FRUC (Frame-Rate Up Conversion) mode. BIO may be disabled for block(s)/sub-block(s) that are predicted by at least two motion vectors that are approximately proportional in the time domain. BIO may be disabled when the average difference between at least two prediction blocks is less than, or equal to, a predefined/predetermined threshold. BIO may be disabled based on gradient information.
[009] A decoding device for coding video data may comprise a memory. A decoding device may comprise a processor. The processor may be configured to identify a plurality of sub-blocks of at least one coding unit (UC). The processor may be configured to select one or more sub-blocks from the plurality of sub-blocks for CM. The processor may be configured to determine a state of a CM condition as satisfied or not satisfied. The processor may be configured to initiate motion compensation without BIO motion refinement processing for the one or more sub-blocks if the state of the CM condition is satisfied. The processor may be configured to initiate motion compensation with BIO motion refinement processing for the one or more sub-blocks if the state of the CM condition is not satisfied.
BRIEF DESCRIPTION OF THE DRAWINGS
[0010] Similar reference numbers in the figures indicate similar elements.
[0011] Figure 1 illustrates a general block diagram of a block-based video encoder.
[0012] Figure 2 illustrates a general block diagram of an example video decoder.
[0013] Figure 3 illustrates an example of bi-directional optical flow.
[0014] Figures 4A and 4B illustrate an example gradient derivation process in BIO with 1/16-pel motion accuracy.
[0015] Figure 5A illustrates example BIO memory access without a block extension constraint.
[0016] Figure 5B illustrates example BIO memory access with a block extension constraint.
[0017] Figure 6 illustrates an example of advanced temporal motion vector prediction.
[0018] Figure 7 illustrates an example of spatial-temporal motion vector prediction.
[0019] Figure 8A illustrates an example FRUC process with template matching.
[0020] Figure 8B illustrates an example FRUC process with bilateral matching.
[0021] Figure 9A illustrates an example affine mode with a simplified affine model.
[0022] Figure 9B illustrates an example affine mode with sub-block-level motion derivation for affine blocks.
[0023] Figure 10A illustrates an example 2D gradient filtering process in BIO, in which the dashed arrows indicate the filtering direction for the horizontal gradient derivation.
[0024] Figure 10B illustrates an example 2D gradient filtering process in BIO, in which the dashed arrows indicate the filtering direction for the vertical gradient derivation.
[0025] Figure 11 illustrates an example motion compensation process.
[0026] Figure 12A illustrates an example modified 2D gradient filtering process for BIO, in which the dashed arrows indicate the filtering directions for the horizontal gradient derivation.
[0027] Figure 12B illustrates an example modified 2D gradient filtering process for BIO, in which the dashed arrows indicate the filtering directions for the vertical gradient derivation.
[0028] Figure 13 illustrates an example mapping function of a rounding method for BIO gradient derivation.
[0029] Figure 14 illustrates an example mapping function of a rounding method for BIO gradient derivation.
[0030] Figure 15A illustrates an example gradient derivation process in BIO with a motion accuracy of 1/16 pel.
[0031] Figure 15B illustrates an example gradient derivation process in BIO with a motion accuracy of 1/16 pel.
[0032] Figure 16A illustrates an example comparison of sub-block-based motion compensation methods.
[0033] Figure 16B illustrates an example comparison of sub-block-based motion compensation methods with sub-block merging.
[0034] Figure 16C illustrates an example comparison of sub-block-based motion compensation methods with 2D sub-block merging.
[0035] Figure 17A illustrates an example indication of samples that are affected by the BIO block extension constraint when a sub-block motion compensation method is applied.
[0036] Figure 17B illustrates an example indication of samples that are affected by the BIO block extension constraint with motion compensation based on 2D sub-block merging.
[0037] Figure 18A illustrates an example row-based sub-block merging implementation.
[0038] Figure 18B illustrates an example column-based sub-block merging implementation.
[0039] Figure 19 illustrates an example of overlapped block motion compensation.
[0040] Figure 20 illustrates an example motion compensation process.
[0041] Figure 21 illustrates an example motion compensation process.
[0042] Figure 22 illustrates an example motion compensation process after skipping BIO for blocks coded in/by a bilateral FRUC mode.
[0043] Figure 23 illustrates an example motion compensation process after skipping BIO based on the motion vector difference.
[0044] Figure 24 illustrates an example motion compensation process after skipping BIO based on a difference between at least two prediction signals.
[0045] Figure 25 illustrates an example motion compensation process after skipping BIO based on gradient information.
[0046] Figure 26 illustrates an example motion compensation process with multi-stage early termination applied to BIO.
[0047] Figure 27 illustrates an example bi-prediction process that averages the two intermediate prediction signals at high precision.
[0048] Figure 28A is a system diagram of an example communications system in which one or more disclosed embodiments may be implemented.
[0049] Figure 28B is a system diagram of an example wireless transmit/receive unit (WTRU) that may be used in the communications system illustrated in Figure 28A.
[0050] Figure 28C is a system diagram of an example radio access network (RAN) and an example core network (CN) that may be used in the communications system illustrated in Figure 28A.
[0051] Figure 28D is a system diagram of a further example RAN and a further example CN that may be used in the communications system illustrated in Figure 28A.
DETAILED DESCRIPTION
[0052] A detailed description of the illustrative embodiments will now be given with reference to the various figures. Although this description provides a detailed example of possible implementations, it should be noted that the details are intended to be exemplary and in no way limit the scope of the application.
[0053] Figure 1 illustrates an example block diagram of a block-based hybrid video coding system. The input video signal 1102 may be processed block by block. Extended block sizes (called a "coding unit" or UC) may be used to compress high-resolution (1080p and higher) video signals.
[0054] The prediction block may be subtracted from the current video block (1116). The prediction residual may be de-correlated using a transform (1104) and/or quantization (1106). The quantized residual coefficients may be inverse quantized (1110) and/or inverse transformed (1112) to form the reconstructed residual, which may be added back to the prediction block (1126) to form the reconstructed block.
[0055] Figure 2 illustrates a general block diagram of an example block-based video decoder. The video bitstream 202 may be unpacked and/or entropy decoded in the entropy decoding unit 208. The coding mode and/or prediction information may be sent to the spatial prediction unit 260 (for example, if intra-coded) and/or to the temporal prediction unit 262 (for example, if inter-coded) to form the prediction block. Residual transform coefficients may be sent to an inverse quantization unit 210 and/or an inverse transform unit 212 to reconstruct the residual block. The prediction block and the residual block may be added at 226. The reconstructed block may undergo in-loop filtering, perhaps for example before it is stored in the reference picture store 264. The reconstructed video in the reference picture store may be sent to drive a display device and/or may be used to predict future video blocks.
[0056] As shown in Figure 1 and/or Figure 2, spatial prediction (for example, intra prediction), temporal prediction (for example, inter prediction), transform, quantization, entropy coding and/or in-loop filtering may be performed.
[0057] Bi-directional optical flow (BIO) is a sample-wise motion refinement built on the optical flow model. For a prediction signal I^{(k)} (k = 0, 1), the optical flow equation can be written as

\frac{\partial I^{(k)}}{\partial t} + v_x \frac{\partial I^{(k)}}{\partial x} + v_y \frac{\partial I^{(k)}}{\partial y} = 0 \qquad (1)

With the combination of the optical flow equation (1) and the interpolation of the prediction blocks along the motion trajectory (for example, as shown in Figure 3), the BIO prediction can be obtained as

\mathrm{pred}_{BIO}(x,y) = \tfrac{1}{2}\Big( I^{(0)}(x,y) + I^{(1)}(x,y) + \tfrac{v_x}{2}\big(\tau_1 \tfrac{\partial I^{(1)}}{\partial x} - \tau_0 \tfrac{\partial I^{(0)}}{\partial x}\big) + \tfrac{v_y}{2}\big(\tau_1 \tfrac{\partial I^{(1)}}{\partial y} - \tau_0 \tfrac{\partial I^{(0)}}{\partial y}\big) \Big) \qquad (2)

where \tau_0 and \tau_1 indicate the temporal distances of the reference pictures I^{(0)} and I^{(1)} to the current picture.
[0058] In Figure 3, (MVx0, MVy0) and (MVx1, MVy1) indicate the block-level motion vectors that can be used to generate the two prediction blocks I^{(0)} and I^{(1)}. The motion refinement (v_x, v_y) at the sample location (x, y) can be calculated by minimizing the difference ∆ between the sample values after motion refinement compensation (for example, A and B in Figure 3), as shown as

\Delta(x,y) = I^{(0)}(x,y) - I^{(1)}(x,y) + v_x\big(\tau_1 \tfrac{\partial I^{(1)}}{\partial x} + \tau_0 \tfrac{\partial I^{(0)}}{\partial x}\big) + v_y\big(\tau_1 \tfrac{\partial I^{(1)}}{\partial y} + \tau_0 \tfrac{\partial I^{(0)}}{\partial y}\big) \qquad (4)

[0059] Perhaps to guarantee the regularity of the derived motion refinement, for example, it can be assumed that the motion refinement is consistent within a local surrounding area centered on (x, y). In an example BIO design, the values of (v_x, v_y) can be derived by minimizing ∆ within the 5x5 window Ω around the current sample at (x, y) as

(v_x^*, v_y^*) = \arg\min_{(v_x,\, v_y)} \sum_{(i,j)\in\Omega} \Delta^2(i,j) \qquad (5)
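As a concrete illustration of equations (2) and (4), the following floating-point sketch (not part of the original text; the function and variable names are ours) computes the per-sample difference term and the BIO prediction, assuming the prediction samples, their gradients, the motion refinement (vx, vy) and the temporal distances are already available:

```python
# Illustrative sketch (not the patent's fixed-point implementation) of the
# per-sample BIO terms of equations (4) and (2).

def bio_delta(i0, i1, gx0, gy0, gx1, gy1, vx, vy, tau0, tau1):
    """Difference term of equation (4) at one sample location."""
    return (i0 - i1
            + vx * (tau1 * gx1 + tau0 * gx0)
            + vy * (tau1 * gy1 + tau0 * gy0))

def bio_pred(i0, i1, gx0, gy0, gx1, gy1, vx, vy, tau0, tau1):
    """BIO prediction of equation (2) at one sample location."""
    return 0.5 * (i0 + i1
                  + 0.5 * vx * (tau1 * gx1 - tau0 * gx0)
                  + 0.5 * vy * (tau1 * gy1 - tau0 * gy0))
```

In an actual codec these operations are carried out in fixed-point arithmetic with the rounding and shift conventions described in the paragraphs that follow.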
[0060] BIO can be applied to bi-predicted blocks, which can be predicted from two reference blocks of temporally neighboring pictures. BIO can be enabled without sending additional information from the encoder to the decoder. BIO can be applied to bi-directionally predicted blocks that have both forward and backward prediction signals. If, for example, the two prediction blocks of the current block come from the same direction (either the forward direction or the backward direction), BIO can be disabled.
[0061] If, for example, the two prediction blocks of the current block are from the same reference picture, BIO can be disabled. When local illumination compensation (CIL) is used for the current block, BIO can be disabled.
[0062] As shown in (2) and (4), perhaps in addition to the block-level CM, gradients can be derived in BIO for a sample of a motion-compensated block (for example, I^{(0)} and I^{(1)}) (for example, to derive the local motion refinement and/or to generate the final prediction at the sample location). In BIO, the horizontal and vertical gradients of the samples in the prediction blocks (for example, ∂I^{(k)}/∂x and ∂I^{(k)}/∂y, k = 0, 1) can be calculated at the same time that the prediction signals are generated, based on filtering processes that can be consistent with motion compensation interpolation (for example, separable 2D finite impulse response (FIR) filters). The input to the gradient derivation process can be the reference sample arrays used for motion compensation and the fractional components (fracX, fracY) of the input motion (MVx0/x1, MVy0/y1).
[0063] To derive the gradient values at the sample positions (for example, each sample position), different filters (for example, an interpolation filter hL and a gradient filter hG) can be applied separately, perhaps, for example, in different orders depending on the direction of the gradient being calculated. When deriving horizontal gradients, the interpolation filter hL can be applied in the vertical direction, followed by the gradient filter hG applied in the horizontal direction.
[0064] Figures 4A and 4B illustrate an example gradient derivation process applied in BIO, where the sample values at integer sample positions are shown with patterned squares and the sample values at fractional sample positions are shown with blank squares. The motion vector accuracy can be increased to 1/16 pel, so that there can be 255 fractional sample positions defined within the region of one integer sample in Figures 4A and 4B, where the subscript coordinate (x, y) of a sample represents its corresponding horizontal and vertical fractional position (for example, the coordinate (0, 0) corresponds to the samples at integer positions). The horizontal and vertical gradient values can be calculated at the fractional position (1, 1) (for example, a1,1). According to Figures 4A and 4B, for a horizontal gradient derivation, the fractional samples f0,1, e0,1, a0,1, b0,1, c0,1 and d0,1 can be derived by applying an interpolation filter hL in the vertical direction, as in (7).
[0065] The precision of f0,1, e0,1, a0,1, b0,1, c0,1 and d0,1 can be 14 bits. The horizontal gradient at a1,1 can then be calculated by applying the corresponding gradient filter hG horizontally to the derived fractional samples. This can be done by calculating the non-rounded gradient values at 20 intermediate bits, as illustrated in (9).
[0066] The final horizontal gradient can be calculated by shifting the intermediate gradient value to the output precision, as in (10):

\frac{\partial I}{\partial x} = \mathrm{sign}(g_h)\cdot\big((|g_h| + o_G) \gg \mathrm{shift}_G\big) \qquad (10)

where sign(·) and abs(·) are the functions that return the sign and the absolute value of the input, g_h is the intermediate horizontal gradient, and o_G is the rounding offset, which can be calculated as o_G = 1 << (shift_G - 1).
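The sign-based rounding of equation (10) (also used in equations (12) and (14) below) can be contrasted with a plain add-and-shift rounding, as in the following sketch (illustrative code; the shift values are arbitrary examples):

```python
# Illustrative sketch of the two rounding styles discussed in this document.
# round_sign_abs follows equations (10)/(12)/(14): round the absolute value,
# then re-apply the sign. round_add_shift is the single-step alternative of
# Figure 14: add an offset and arithmetic-shift right.

def round_sign_abs(value, shift):
    offset = 1 << (shift - 1)
    sign = 1 if value >= 0 else -1
    return sign * ((abs(value) + offset) >> shift)

def round_add_shift(value, shift):
    offset = 1 << (shift - 1)
    return (value + offset) >> shift   # Python's >> is arithmetic on negatives

# The two differ only at inputs like -0.5, -1.5, ... (in units of 1 << shift):
assert round_sign_abs(-8, 4) == -1
assert round_add_shift(-8, 4) == 0
```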
[0067] When deriving the vertical gradient value at (1, 1), intermediate vertical gradient values at the fractional position (0, 1) can first be derived, as in (11).
[0068] The intermediate gradient values can then be rounded and shifted to the intermediate precision, as shown in (12).
[0069] The vertical gradient value at the fractional position (1, 1) can be obtained by applying an interpolation filter hL to the intermediate gradient values at the fractional position (0, 1). This can be done by calculating the non-rounded gradient value at 20 bits, which can then be adjusted to the output bit depth using the shift operation, as shown in (13) and (14).
[0070] As shown in (5), perhaps to derive the local motion refinement at one position, sample values and gradient values can be calculated for the samples in a window Ω surrounding the sample. The window size can be (2M + 1) x (2M + 1), where M = 2. As described in the present invention, the gradient derivation can access additional reference samples in the extended area of the current block. Bearing in mind that the length T of the interpolation filter and of the gradient filter can be 6, the corresponding extended block size can be equal to T - 1 = 5. For a given W x H block, the memory access required by BIO can be (W + T - 1 + 2M) x (H + T - 1 + 2M) = (W + 9) x (H + 9), which can be greater than the (W + 7) x (H + 7) memory access used by regular motion compensation.
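As a worked example of the memory-access figures in paragraph [0070] (a sketch using the document's own parameters T = 6 and M = 2; the block size is arbitrary):

```python
# Worked example of the BIO memory-access arithmetic of paragraph [0070].
T = 6   # interpolation / gradient filter length
M = 2   # half window size of the 5x5 BIO window

def bio_access(w, h):
    return (w + T - 1 + 2 * M) * (h + T - 1 + 2 * M)

def regular_mc_access(w, h):
    # regular motion compensation with an 8-tap interpolation filter
    return (w + 7) * (h + 7)

print(bio_access(16, 16))         # (16 + 9) * (16 + 9) = 625 reference samples
print(regular_mc_access(16, 16))  # (16 + 7) * (16 + 7) = 529 reference samples
```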
[0074] [0074] In the spatial temporal movement vector (PVMTE) prediction, the movement information of the sub-blocks in a coding block can be derived recursively, and the example of this is illustrated in Figure 7. Figure 7 shows an example to illustrate the concept. As shown in Figure 7, the current block can contain four sub-blocks A B, C and D. The neighboring small blocks that are spatial neighbors of the current block are identified as A, B, C and D, respectively. The motion derivation of sub-block A can identify its two spatial neighbors. The first neighbor of sub-block A can be neighbor C. If, for example, the small block C is not available or intracoded, the next small blocks above the current block (from left to right) can be checked. A second neighbor of sub-block A can be the neighbor of left B. If, for example, the small block B is not available or intracoded, the next small blocks to the left of the current block (from top to bottom) can be checked. The movement information of the temporal neighbors of sub-block A can be obtained following a procedure similar to that of the PVMT process in HEVC. At [0075] [0075] FRUC (Frame-Rate Up Conversion) mode can be supported for intercoded blocks. When this mode is enabled, for example, the movement information (for example, including motion vectors and / or reference indexes) of the coded block may not be signaled. The information can be derived on the decoder side using model matching techniques and / or bilateral matching. Perhaps during the process of deriving motion in the decoder, for example, the list of candidates for merging the block and / or a set of preliminary motion vectors generated from the motion vectors of the coalocated temporal blocks of the current block can be checked. The candidate leading to the sum of the minimum absolute difference (SDA) can be selected as the starting point. A search (for example, location) based on model matching and / or bilateral matching around the starting point can be performed. The MV that results in the minimum SDA can be taken as the MV for the entire block. The motion information can be further refined at a sub-block level for better motion compensation efficiency. [0076] [0076] Figure 8A and Figure 8B illustrate an example of the FRUC process. As shown in Figure 8A, model matching can be used to derive motion information from the current block by finding a (for example, the best) correspondence between a model (for example, neighboring upper and / or left blocks of the current block) in the current image and a block (for example, same size as the model) in a reference image. In Figure 8B, bilateral correspondence can be used [0077] [0077] A translation movement model can be applied to predict movement compensation. There are many types of movement, for example, approach / departure, rotation, perspective movements and other irregular movements. A related transform motion compensation forecast can be applied. As shown in Figure 9A, a motion field related to the block can be described by some (for example, two) control point motion vectors. Based on the control point movement, the field of motion of a related block can be described as: (v1x - v0 x) (v1 y - v0 y) vx = x− y + v0 xww (15) (v1 y - v0 y) (v - v) vy = x + 1x 0 xy + v 0 yww [0078] [0078] Where (v0x, v0y) can be a motion vector from the upper left control point and (v1x, v1y) can be a motion vector from the upper right control point, as shown in Figure 9A . 
Perhaps, for example, when a video block is similarly encoded, its motion field can be derived based on the granularity of the 4 x 4 block. To derive the motion vector from a 4 x 4 block, the motion vector from the center of each sub-block sample, as shown in Figure 9B, can be calculated according to (15), and can be rounded to 1/16 skin of precision. The vectors of [0079] [0079] SIMD (Single Instruction, Multiple Data) instructions can be used in the software / hardware design of modern video encoders to accelerate the processing speed of SIMD encoding and decoding instructions and can perform the same operation on multiple data elements simultaneously perhaps using a single statement. The SIMD width defines the number of data elements that can be processed in parallel by a record. 128-bit SIMD instructions can be used on general purpose central processing units (CPUs). Graphics processing units (GPUs) can support larger SIMD implementations, for example, supporting arithmetic, logic, loading, storage instructions with 512-bit records. [0080] [0080] As discussed in this document, to reduce the number of filtering operations, a BIO implementation can use separable 2D FIR filters in the gradient derivation process, for example, the combination of a 1D low-pass interpolation filter and a filter 1D high pass gradient. The selection of the corresponding filter coefficients can be based on the fractional position of the target sample. Due to these characteristics (for example, separable 2D filters), some computational operations can be performed in parallel for multiple samples. The gradient derivation process may be suitable for SIMD acceleration. [0081] [0081] Vertical filtering can be applied followed by horizontal filtering for horizontal and vertical gradient derivations. For example, to calculate horizontal gradients, vertical interpolation can be performed using an hL interpolation filter to generate intermediate samples, followed by the hG gradient filter being applied. [0082] [0082] As shown in (10), (12) and (14), rounding operations during the gradient derivation process can be performed by calculating the absolute of the input data, rounding the absolute by adding a shift followed by a shift to the right and multiplying the rounded absolute value with the sign of the input data. [0083] [0083] As discussed in this document, one or more sub-block encoding modes can be used (for example, PVMTA, PVMTE, FRUC and the like). When a sub-block level coding mode is enabled, the current coding block can be further divided into several small sub-blocks and the movement information for each sub-block can be derived separately. Since the motion vectors of the sub-blocks within a coding block can be different, motion compensation can be performed separately for each sub-block. Assuming that the current block is encoded by a sub-block mode, Figure 11 illustrates an example process used to generate the forecast signal of the block using BIO-related operations. As shown in Figure 11, motion vectors can be derived for some (for example, all) sub-blocks of the current block. The regular CM can then be applied to generate the motion compensated forecast signal (for example, Predi) for the sub-block (s) within the block. If, for example, the BIO is used, movement refinement based on the BIO can be performed additionally to obtain the modified PredBIOi forecast signal for the sub-block. This can result in multiple BIO invocations to generate the forecast signal for each sub-block. 
Interpolation filtering and gradient filtering can access additional reference samples (depending on the length of the filter) for [0084] [0084] The derivation of the BIO forecast can be implemented by an efficient SIMD implementation. [0085] [0085] In an exemplary BIO implementation, the BIO forecast can be derived with Equation (2). In an exemplary BIO implementation, the BIO forecast can include one or more steps (for example, two steps). A step (for example, a first step) can be to derive an adjustment (for example, with Equation (16)) from the state of high precision. A step (for example, a second step) can be to derive the BIO forecast by combining forecasts (for example, two forecasts) from lists (for example, two lists) and adjusting, as seen in Equation ( 17). [0086] [0086] The parameter round1 can be equal to (1 << (shift1-1)) for rounding of 0.5. [0087] [0087] The parameter round2 can be equal to (1 << (shift2-1)) for rounding of 0.5. [0088] [0088] The rounding in Equation (16) can calculate the absolute value and the sign of a variable, and combine the sign and the intermediate result after shifting to the right. Rounding in Equation (16) can use multiple operations. [0089] [0089] BIO gradients can be derived so that a [0090] [0090] Vertical filtering can be followed by horizontal filtering in the BIO gradient derivation process. The width of the middle block may not be well aligned with the common SIMD record lengths. Rounding operations during the gradient derivation process can also be performed based on absolute values, which can introduce expensive computations (for example, calculation and multiplications of absolute) for SIMD implementation. [0091] [0091] As shown in Figures 10A and 10B, vertical filtering can be followed by horizontal filtering in the BIO gradient calculation process. Perhaps, due to the length of the interpolation filters and the gradient filters that can be used, the width of the intermediate data after vertical filtering may be W + 5. Such width may or may not be aligned with the widths of the SIMD records that can be used in practice. [0092] [0092] Horizontal filtering can be performed followed by vertical filtering for horizontal and vertical gradient derivations. To calculate horizontal gradients, the horizontal gradient filter hG can be performed to generate intermediate horizontal gradients based on the fractional horizontal movement fracX, followed by the interpolation filter hL being applied vertically after the intermediate horizontal gradients of [0093] [0093] As discussed in this document, the BIO gradient derivation can round an input value based on its absolute value, which can minimize rounding errors. For example, the absolute of the entry can be calculated by rounding the absolute value, and multiplying the absolute value rounded with the sign of the entry. This rounding can be described as: [0094] [0094] Figure 13 and Figure 14 compare the mapping functions of different rounding methods. As seen in Figure 13 and Figure 14, the difference between the rounded values calculated by the two methods can be small. There may be a difference when the input value of σ_i is perhaps equal to -0.5, -1.5, -2.5, ..., which can be rounded up to integers -1, -2 , -3, ... by the rounding method of Figure 13 and for the integers 0, -1, -2, ... by the rounding method of Figure 14. The impact of the coding performance introduced by the rounding method of Figure 14 can be negligible. 
As seen in (17), the rounding method in Figure 14 can be completed in a single step and can be implemented by adding and shifting to the right, both of which can be less expensive than calculating absolute values and multiplications. that can be used in (16). [0095] [0095] As discussed in this document, the ordering of a separable 2D filter and / or the use of certain rounding methods can impact the gradient derivation of the BIO. Figures 15A and 15B illustrate an exemplary gradient derivation process in which the ordering of a separable 2D filter and the use of certain rounding methods can impact BIO gradient derivation. For example, when deriving a horizontal gradient, the values of the horizontal gradient in the samples [0096] [0096] The horizontal gradient of a1,1, for example, can be interpolated from those intermediate horizontal gradient values by applying the hL interpolation filter vertically, as illustrated as: (21) [0097] [0097] The vertical gradient can be calculated by interpolating the sample values in the fractional position (1.0) by applying the hL interpolation filter in the horizontal direction, for example, (22) [0098] [0098] The value of the vertical gradient in a1.1 can be obtained by means of the vertical execution of the gradient filter hG on the intermediate fractional positions (1.0), as shown as: (23) [0099] [0099] The bit depth increases caused by the interpolation filter and the gradient filter can be the same (for example, 6 bits as indicated by Table 1 and Table 2). Changing the filtering order may not affect the internal bit depth. [00100] [00100] As discussed in this document, one or more of the encoding tools (for example, PVMTA, PVMTE, FRUC and the like) based on sub-block level motion compensation can be used. If these coding tools are enabled, a coding block can be divided into multiple small sub-blocks (for example, 4x4 blocks) and can derive their own motion information (for example, reference image indexes and motion vectors) that can be used in the motion compensation stage. Motion compensation can be performed separately for each sub-block. Additional reference samples can be obtained to perform motion compensation for each sub-block. Region-based motion compensation based on variable block sizes can be applied to merge contiguous sub-blocks that present the same motion information within the coding block for a motion compensation process. This can decrease the number that the motion compensation process and the BIO process have applied [00101] [00101] The motion compensation forecast can be performed for blocks that are coded by sub-block modes. Variable block size motion compensation can be applied by merging contiguous sub-blocks that have the same motion information into a group of sub-blocks. Single motion compensation can be performed separately for each group of sub-blocks. [00102] [00102] When merging sub-blocks based on a line, adjacent sub-blocks can be merged by locating the same line of sub-blocks within the current coding block that has identical movement for a group and performs a compensation of unique movement for the sub-blocks within the group. Figure 16A shows an example in which the current coding block consists of 16 sub-blocks and each block can be associated with a specific motion vector. 
Based on the existing sub-block based motion compensation method (as shown in Figure 11), perhaps to generate the forecast signal from the current block, both regular motion compensation and BIO motion refinement can be done to each sub-block separately. Correspondingly, there can be 16 invocations of motion compensation operations (each operation includes regular motion compensation and the BIO). Figure 16B illustrates the subblock motion compensation process after the line based subblock merging scheme is applied. As shown in Figure 16B, after the horizontal merging of the sub-blocks with identical movement, the number of movement compensation operations can be reduced to [00103] [00103] Sub-block merging may be dependent on the format of [00104] [00104] As seen in Figure 16B, the movement of neighboring sub-blocks in a horizontal direction can be considered for merging in the motion compensation stage. For example, sub-blocks in one (for example, the same) row of sub-blocks within the UC can be considered for merging in the motion compensation stage. A (for example, a single) quaternary tree plus binary tree structure ("QTBT" - Quad-Tree plus Binary-Tree) can be applied to partition the blocks into an (for example, a single) image. In the QTBT structure, one (for example, each) encoding tree unit (UAC) can be partitioned using a quaternary tree implementation. One (for example, each) leaf node of the quaternary tree can be partitioned by a binary tree. This separation can occur in the horizontal and / or vertical direction. Coding blocks in a rectangular and / or square format can be used for intracoding and / or intercoding. This may be due to the binary tree partitions. If, for example, a block partitioning scheme is implemented and a line-based subblocking method is applied, the subblocking may have a similar (for example, identical) movement in the horizontal direction. For example, if a rectangular block is oriented vertically (for example, the height of the block is greater than the width of the block), the adjacent sub-blocks located in the same sub-block column may be more correlated than the sub-blocks. blocks that are located in the same row of sub-blocks. In such a case, the sub-blocks can be merged in the vertical direction. [00105] [00105] A subblock merging scheme depending on the block format can be used. For example, if the width of a CU is greater than or equal to its height, a scheme of merging sub-blocks in relation to the row can be used to jointly predict sub-blocks with [00106] [00106] In the line / column based sub-block merging scheme described here, the consistency of movement of neighboring sub-blocks in the horizontal direction and / or in the vertical direction can be considered to merge the sub-blocks in the compensation stage of movement. In practice, the movement information of adjacent sub-blocks can be highly correlated in the vertical direction. For example, as shown in Figure 16A, the motion vectors of the first three sub-blocks in the first row of sub-blocks and the second row of sub-blocks can be the same. In such a case, the consistency of horizontal and vertical movement can be considered when merging sub-blocks for more efficient movement compensation. A 2D sub-block merging scheme can be used, in which the adjacent sub-blocks in the horizontal and vertical directions can be merged into a group of sub-blocks. 
To calculate the block size for each motion compensation, a progressive search method can be used to merge the sub-blocks horizontally and vertically. Given the position of a sub-block, this can be done: by calculating the maximum number of consecutive sub-blocks in the rows of sub-blocks (for example, each row of sub-blocks) that can be merged into [00107] [00107] The search method described here can be summarized by the following example procedures. Given a position of sub-block bi, j in the i-th row of sub-blocks and j-th column of sub-blocks, the number of consecutive sub-blocks in the i-th row of sub-blocks with the same movement as current sub-block (for example, Ni) can be calculated. The corresponding motion compensation block size Si = Ni and define k = i; you can proceed to the (k + 1) -th row of sub-blocks and calculate the number of consecutive sub-blocks that can be merged (for example, Nk + 1); update Nk + 1 = min (Nk, Nk + 1) and calculate the corresponding motion compensation block size Sk + 1 = Nk + 1 · (k-i + 1); if Sk + 1 ≥ Sk, define Nk + 1 = Nk, k = k + 1, proceed to the (k + 1) -th row of sub-blocks and calculate the number of consecutive sub-blocks that can be merged (for example, Nk + 1); update Nk + 1 = min (Nk, Nk + 1) and calculate the corresponding motion compensation block size Sk + 1 = Nk + 1 · (k-i + 1); otherwise, interrupt. [00108] [00108] Figure 16C illustrates the corresponding sub-block movement compensation process after the 2D sub-block merge scheme is applied. As seen in Figure 16C, the number of motion compensation operations can be 3 (for example, a motion compensation operation for the three groups of sub-blocks). [00109] [00109] As described in the present invention, the merging of sub-blocks can be performed. The block extension constraint can be applied [00110] [00110] Rounding can be performed in the BIO forecast derivation. Rounding can be applied (for example, it can be applied first) to an absolute value. A signal can be applied (for example, it can then be applied after shifting to the right). The rounding in the adjustment derivation for BIO prediction can be applied as shown in Equation (24). The right shift [00111] [00111] The rounding method seen in (24) can perform two rounding operations (for example, round1 in (24) and round2 in (17)) at the original input values. The rounding method seen in (24) can merge the two shift operations to the right (for example, shift1 in (24) and shift2 in (17)) in a single shift to the right. A final forecast generated by the BIO can be seen in Equation (25): (25) where round3 is Equal to (1 << (shift1 + shift2 - 1)). [00112] [00112] A current bit depth to derive the adjustment values (which can be defined as 21 bits, where a bit can be used for the signal) can be greater than an intermediate bipredict bit depth. As shown in (16), a rounding operation (for example, round1) can be applied to an adjustment value (for example,). The intermediate bit depth can be applied to generate bi-prediction signals (for example, which can be set to 14 bits). An offset to the right based on the absolute value (for example, shift1 = [00113] [00113] It should be mentioned that, in addition to being applied separately, the methods described here can be applied in combination. For example, the BIO gradient derivation and subblock motion compensation described here can be combined. The methods described here can be enabled together in the motion compensation stage. 
[00114] [00114] Overlapping block movement compensation (CMBS) can be performed to remove the blocking artifact at the CM stage. CMBS can be performed for one or more, or all, contours of the block, perhaps for example except the right and / or bottom contours of a block. When a video block is encoded in a subblock mode, a subblock mode can refer to an encoding mode that allows [00115] [00115] Weighted averages can be used in the CMBS to generate the forecast signal for one or more blocks. The forecast signal can be denoted using the motion vector of at least one neighboring sub-block PN and / or the forecast signal using the motion vector of the current sub-block such as PC. When CMBS is applied, the weighted average of the samples in the first / last four rows / columns of PN can be obtained with the samples in the same positions in PC. The samples to which the weighted average is applied can be determined according to the location of the corresponding neighboring sub-block. When the neighboring sub-block is a neighbor above (for example, sub-block b in Figure 19), for example, the samples in the first X rows of the current sub-block can be adjusted. When the neighboring sub-block is a neighbor below (for example, sub-block d in Figure 19), for example, the samples in the last X rows of the current sub-block can be adjusted. When the neighboring sub-block is a neighbor on the left (for example, sub-block a in Figure 19), for example, the samples in the first X columns of the current block can be adjusted. Perhaps, when the neighboring sub-block is a neighbor on the right (for example, sub-block c in Figure 19), for example, the samples in the last X columns of the current sub-block can be adjusted. [00116] [00116] The values of X and / or weight can be determined based on the encoding mode that is used to encode the current block. For example, when the current block is not coded in a sub-block mode, weighting factors {1/4, 1/8, 1/16, 1/32} can be used for at least the first four rows / columns PN and / or weighting factors {3/4, 7/8, 15/16, 31/32} can be used for the first four rows / columns of PC. For example, when the current block is coded in subblock mode, the average of the first two rows / columns of PN and PC can be obtained. In such scenarios, among others, weighting factors {1/4, 1/8} can be used for PN and / or weighting factors {3/4, 7/8} can be used for PC. [00117] [00117] As described in the present invention, BIO can be considered an enhancement of regular CM by improving the granularity and / or the accuracy of the motion vectors that are used in the CM stage. Assuming that UC contains multiple sub-blocks, Figure 20 illustrates an example of the process for generating the forecast signal for UC using BIO-related operations. As shown in Figure 20, motion vectors can be derived for one or more, or all, sub-blocks of the current UC. The CM can be applied to generate the motion compensated forecast signal (for example, Predi) for one or more, or each, sub-block within the UC. Perhaps, if BIO is used, for example, motion refinement based on BIO can be performed to obtain the modified PredBIOi forecast signal for the sub-block. When CMBS is used, for example, it can be performed for one or more, or each, sub-block of the UC following the same procedure as described here to generate the corresponding CMBS forecast signal. 
In some scenarios, the motion vectors of neighboring spatial sub-blocks (for example, perhaps instead of the current sub-block motion vector) can be used to derive the forecast signal. [00118] [00118] In Figure 20, for example, when at least one sub-block is expected, the BIO can be used in the regular CM stage and / or in the CMBS stage. The BIO can be invoked to generate the forecast signal for the sub-block. Figure 21 shows an example flowchart of a CMBS forecast generation process, which can be performed without the BIO. In the figure. 21, perhaps for example at the regular CM stage, the motion compensated forecast can still be followed by the BIO. As described in the present invention, the derivation of BIO-based motion refinement can be a sample-based operation. [00119] [00119] BIO can be applied in the regular CM process for a current UC encoded with a sub-block mode (for example, FRUC, affine mode, PVMTA and / or PVMTE). For UCs encoded by one or more, or any of these, sub-block modes, the UC can further be divided into one or more, or multiple, sub-blocks and one or more, or each, sub-block can be assigned to one or more unique motion vectors (for example, uniprevision and / or biprevision). Perhaps for example when the BIO is enabled, the decision to apply or not the BIO and / or the operation of the BIO itself can be carried out separately for one or more, or each, of the sub-blocks. [00120] [00120] As described here, one or more techniques are contemplated to ignore BIO operations in the CM stage (for example, the regular CM stage). For example, the BIO core design (for example, calculating gradients and / or refined motion vectors) can be kept the same and / or substantially similar. In one or more techniques, BIO operation can be (for example, partially or completely) disabled for blocks / sub-blocks in which one or more factors or conditions can be satisfied. In some cases, CM can be performed without BIO. [00121] [00121] In a sub-block mode, a separate UC can be divided into more than one sub-block and / or one or more different vectors of [00122] [00122] As described here, the BIO can compensate for the movement (for example, small) that can remain between (for example, at least) two forecast blocks generated by the CM based on conventional block. As shown in Figure 8A and Figure 8B, bilateral FRUC correspondence can be used to estimate the motion vectors based on the time symmetry along the movement path between the forecast blocks in the forward and / or forward reference images. back. For example, the value of the motion vectors associated with the two forecast blocks can be proportional to the time distance between the current image and its respective reference image. The movement estimate based on bilateral correspondence can provide one or more movement vectors (for example, reliable), perhaps for example when there may be (for example, only) a small translational movement between two reference blocks (for example, the encoding blocks in the highest temporal layers in the random access configuration). [00123] [00123] For example, when at least one sub-block is coded by the bilateral FRUC mode, among other scenarios, the one or more true motion vectors of the samples within the sub-block may be (for example, must be) coherent. In one or more techniques, the BIO can be disabled during the regular CM process for the one or more sub-blocks that are encoded by the bilateral FRUC mode. 
Figure 22 shows an example diagram for a forecast generation process after the [00124] [00124] As described here, the BIO can be ignored for the one or more bilateral FRUC sub-blocks for which the two motion vectors can be (for example, can always be) symmetric in the time domain. Perhaps to achieve further reductions in complexity, among other scenarios, the BIO process can be ignored at the CM stage based on the difference (for example, absolute) between the at least two motion vectors of at least one predicted sub-block. For example, for one or more sub-blocks that can be predicted by means of two motion vectors that can be approximately proportional in the time domain, it may be reasonable to assume that the two forecast blocks are highly correlated and / or the motion vectors that are used in CM at the sub-block level may be sufficient to accurately reflect the true movement between the forecast blocks. In one or more scenarios, the BIO process can be ignored for these sub-blocks. For scenarios in which bi-predicted sub-blocks whose motion vectors may not be (for example, may be far from being) temporally proportional, the BIO can be executed after predictions with motion compensation at the sub-block level. [00125] [00125] Using the same notations as in Figure 3, for example, (MVx0, MVy0) and / or (MVx1, MVy1) denote the motion vectors at the sub-block level (for example, forecast signal) that can be used to generate the two forecast blocks. In addition, and denote the temporal distances of the temporal reference images forward and / or backward to the current image. In addition, (MVsx1, MVsy1) can be calculated as the scaled version of (MVx1, MVy1), which can be generated based on and [00126] [00126] Based on (28), perhaps when one or more contemplated techniques can be applied, the BIO process can be ignored for at least one block and / or sub-block. For example, based on (28), two forecast blocks (for example, reference blocks) can be determined to be similar or different (for example, determined based on a difference in forecast). If the two forecast blocks (for example, reference blocks) are similar, the BIO process can be ignored for at least one block or sub-block. If the two forecast blocks (for example, reference blocks) are different, the BIO process may not be ignored for at least one block or sub-block. For example, two forecast blocks can be determined to be similar when the following condition is met: (29) [00127] [00127] The thres variable can indicate a predefined / predetermined limit of a motion vector difference. Otherwise, the motion vectors that can be used for CM at the subblock level of the current subblock can be considered inaccurate. In such scenarios, among others, the BIO can also be applied to the sub-block. For example, the thres variable can be flagged and / or determined (for example, determined by a decoder) based on a desired coding performance. [00128] [00128] Figure 23 illustrates an exemplary forecast generation process in which the BIO can be deactivated based on the motion vector difference criteria of (29). As can be seen in (29), the limit of a motion vector difference (for example, thres) can be used to determine whether at least one block or sub-block can ignore the [00129] [00129] As shown in (28) and (29), the scaling of the motion vector can be applied to (MVx1, MVy1) to calculate the difference of the motion vector. 
For example, when the dimensioning of the motion vector at (28) is applied and / or it is assumed that, the error incurred by the motion estimate of (MVx1, MVy1) can be amplified. The scaling of the motion vector can be applied (for example, it can always be applied) to the motion vector that is associated with the reference image that has a relatively large time distance from the current image. For example, when, the motion vector (MVx1, MVy1) can be scaled to calculate the difference of the motion vector (for example, as shown by (28) and (29)). Otherwise, when, the motion vector (MVx0, MVy0) can be scaled to calculate the difference of the motion vector, as indicated as. [00130] [00130] As described in the present invention, the scaling of the motion vector can be applied when the temporal distances of the two reference images to the current image are different (for example,). Since the scaling of the motion vector can introduce additional errors (for example, due to splitting and / or rounding operations), this could influence (for example, reduce) the accuracy of the scaled motion vector. In order to avoid or reduce such errors, among other scenarios, the difference of the motion vector (for example, as indicated in (30) and (31)) can be used (for example, it can only be used) to disable the BIO in the CM stage, perhaps for example when (for example, only when) the time distance of at least two, or even two, reference images (for example, reference blocks) of the current sub-block can be the same or substantially similar (for example, ). [00131] [00131] As described here, the motion vector difference can be used as the measurement to determine whether the BIO process can be ignored for at least one sub-block in the CM stage (for example, based on the two blocks are similar or not). When the difference of the motion vector between two reference blocks is relatively small (for example, below a limit), it may be reasonable to assume that the two forecast blocks can be similar (for example, highly correlated), so that the BIO can be disabled without incurring loss of coding (for example, substantial). The difference in the motion vector can be one of a number of ways to measure the similarity (for example, correlation) between two forecast blocks (for example, reference blocks). In one or more techniques, the correlation between two forecast blocks can be determined by calculating a [00132] [00132] The variables e are the sample values in the coordinate (x, y) of the blocks with motion compensation derived from the forward and / or backward reference images (for example, reference blocks). The sample values can be associated with those luma values of the respective reference blocks. The sample values can be interpolated from their respective reference blocks. The variables B and N are the set of sample coordinates and the number of samples as defined in the current block or sub-block, respectively. Variable D is the distortion measurement to which one or more different measurements / metrics can be applied, such as: a square error sum (SEQ), an absolute difference sum (SDA) and / or an absolute transformed difference sum ( SDTA). Given (32), the BIO could be ignored in the CM stage, perhaps for example when the difference measurement may not be greater than one or more predefined / predetermined limits, for example,. 
Otherwise, the two forecast signals from the current sub-block can be considered as being different (for example, less correlated), to which the BIO can be (for example, still be applied). As described here, it can be signaled or determined by the decoder (for example, based on the desired encoding performance). Figure 24 illustrates an example of a forecast generation process after the BIO is ignored based on measuring the difference between two forecast signals. [00133] [00133] As described here, the BIO process can be conditionally ignored at the UC or sub-UC level (for example, in sub-blocks with the UC). For example, an encoder and / or decoder can determine whether the BIO can be ignored for a current UC. As described [00134] [00134] As shown in Figure 24, the BIO process can be conditionally ignored for the current UC or the sub-blocks within the current UC whose distortion between its two forecast signals may not be greater than a limit. The calculation of the distortion measurement and BIO process can be performed based on the sub-block and can be invoked frequently for the sub-blocks in the current UC. Distortion measurement at the UC level can be performed to determine whether the BIO process for the current UC should be ignored. The early termination of multiple stages can be performed, in which the BIO process can be ignored based on the distortion values calculated from different block levels. [00135] [00135] The distortion can be calculated considering some (for example, all) samples within the current CU. For example, if the distortion at the UC level is small enough (for example, no greater than a threshold at the predefined UC level), the BIO process can be ignored for the UC; otherwise, the distortion for each sub-block within the current UC can be calculated and used to determine whether the BIO process can be ignored at the sub-block level. Figure 26 illustrates an exemplary forecasting process with motion compensation with an anticipated multistage ending being applied to the BIO. The variables in Figure 26, and represent forecast signals generated for the current UC from the reference image list L0 and L1, and e represent [00136] [00136] As described in the present invention, the distortion at the UC level can be calculated to determine whether, for example, the BIO operation can be disabled or not for the current UC. The movement information of the sub-blocks within the UC may or may not be highly correlated. Sub-block distortions within the UC may vary. A multi-stage early termination can be performed. For example, a UC can be divided into multiple groups of sub-blocks, and a group can include contiguous sub-blocks that have similar (for example, equal) motion information (for example, the same reference image indices and motion vectors). The distortion measurement can be calculated for each sub-block. If, for example, the distortion of a group of sub-blocks is small enough (for example, no greater than a predefined limit), the BIO process can be ignored for samples within the group of sub-blocks; otherwise, the distortion for each sub-block within the group of sub-blocks can be calculated and used to determine whether the BIO process can be ignored for the sub-block. [00137] [00137] In (32), and can refer to the values (for example, the luma values) of the samples with motion compensation in the coordinate (x, y) obtained from the reference image lists L0 and L1. 
[00137] In (32), I^(0)(x, y) and I^(1)(x, y) can refer to the values (for example, the luma values) of the motion-compensated samples at the coordinate (x, y) obtained from the reference image lists L0 and L1. The values of the motion-compensated samples can be set to the precision of the bit depth of the input signal (for example, 8 bits or 10 bits if the input signal is 8-bit or 10-bit video). The forecast signal of a bipredicted block can be generated by averaging the two forecast signals L0 and L1 at the accuracy of the input bit depth.
[00138] If, for example, the MVs point to fractional sample positions, the two forecast signals can be generated by interpolation and kept at an intermediate, higher bit depth precision before being averaged.
[00139] Given the biprediction signals generated at high bit depth, the corresponding distortion between the two forecast blocks in Equation (32) can be calculated at the intermediate precision, for example as specified by:

D^h = (1/N) · Σ_(x,y)∈B Dist( I^(0),h(x, y), I^(1),h(x, y) )        (33)

where I^(0),h(x, y) and I^(1),h(x, y) are the high precision sample values at the (x, y) coordinate of the forecast blocks generated from L0 and L1.
[00140] The distortion limits at the UC level and at the sub-block level defined at the precision of the input bit depth have corresponding distortion limits at the accuracy of the intermediate bit depth (for example, obtained by scaling the input bit depth limits according to the difference between the two bit depths).
[00141] As described here, the BIO can provide a sample-wise movement refinement, which can be calculated based on the local gradient information at one or more, or each, sample location in at least one motion-compensated block. For sub-blocks within a region that contains less high-frequency detail (for example, a flat area), the gradients that can be derived using the gradient filters in Table 1 may tend to be small. As shown in Equation (4), when the local gradients are close to zero, the final forecast signal obtained from the BIO can be approximately equal to the forecast signal generated by conventional biprediction, for example, (I^(0)(x, y) + I^(1)(x, y)) / 2.
[00142] In one or more techniques, the BIO can be applied (for example, only applied) to the sub-blocks whose local gradient information is sufficiently large; otherwise, for example when the average gradient magnitude of a sub-block is not greater than a predefined limit, the BIO can be skipped and the conventional biprediction can be used, as illustrated below.
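This gradient-based condition can be sketched as follows (Python with NumPy; simple central differences stand in for the gradient filters of Table 1, and the limit value is a hypothetical choice):

    import numpy as np

    def avg_gradient_magnitude(pred):
        """Mean magnitude of the horizontal and vertical local gradients,
        computed here with central differences rather than the gradient
        filters of Table 1."""
        g = pred.astype(np.float64)
        gx = np.gradient(g, axis=1)
        gy = np.gradient(g, axis=0)
        return float(np.mean(np.abs(gx)) + np.mean(np.abs(gy)))

    def bio_worth_applying(pred0, pred1, grad_thres=1.0):
        """In a flat region the gradients are near zero and the BIO output
        approaches the plain biprediction average (pred0 + pred1) / 2, so
        the refinement can be skipped (grad_thres is hypothetical)."""
        avg = (avg_gradient_magnitude(pred0) +
               avg_gradient_magnitude(pred1)) / 2
        return avg > grad_thres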
[00143] The one or more techniques described here, which can individually skip the BIO in the MC stage, can be applied together (for example, more than one technique can be combined, etc.). The one or more techniques, limits, equations and/or factors/conditions, etc., described here can be freely combined; one or more combinations other than those explicitly described here are also contemplated.
[00144] Figure 28A is a diagram illustrating an exemplary communications system 100 in which one or more disclosed modalities can be implemented. Communications system 100 can be a multiple access system that provides content, such as voice, data, video, messages, broadcasting, etc., to multiple wireless users. Communications system 100 can enable multiple wireless users to access this content by sharing system resources, including wireless bandwidth. For example, communications system 100 may employ one or more channel access methods, such as code division multiple access ("CDMA"), time division multiple access ("TDMA"), frequency division multiple access ("FDMA"), orthogonal FDMA ("OFDMA"), single carrier FDMA ("SC-FDMA"), zero-tail unique-word discrete Fourier transform spread OFDM ("ZT UW DTS-s OFDM"), unique word OFDM ("UW-OFDM"), resource block filtered OFDM, filter bank multicarrier ("FBMC") and the like.
[00145] As shown in Figure 28A, communications system 100 may include wireless transmit/receive units (WTRUs) 102a, 102b, 102c, 102d, a RAN 104/113, a CN 106/115, a public switched telephone network ("PSTN") 108, the Internet 110 and other networks 112, although it should be considered that the disclosed modalities contemplate any number of WTRUs, base stations, networks and/or network elements. Each one of the WTRUs 102a, 102b, 102c, 102d can be any type of device configured to operate and/or communicate in a wireless environment. For example, the WTRUs 102a, 102b, 102c, 102d, any of which can be called a "station" and/or a "STA", can be configured to transmit and/or receive wireless signals and can include a user equipment ("UE"), a mobile station, a fixed or mobile subscriber unit, a subscription-based unit, a pager, a cell phone, a personal digital assistant ("PDA"), a smartphone, a laptop computer, a netbook computer, a personal computer, a wireless sensor, an access point or MiFi device, an Internet of things ("IoT") device, a wristwatch or other wearable device, a head-mounted display ("HMD"), a vehicle, a drone, a medical device and applications (for example, remote surgery), an industrial device and applications (for example, a robot and/or other wireless devices operating in an industrial and/or automated processing chain context), a consumer electronic device, a device operating on commercial and/or industrial wireless networks, and the like. Any of the WTRUs 102a, 102b, 102c and 102d can be called interchangeably a UE.
[00146] Communications system 100 may also include a base station 114a and/or a base station 114b. Each of the base stations 114a, 114b can be any type of device configured to interface wirelessly with at least one of the WTRUs 102a, 102b, 102c, 102d to facilitate access to one or more communication networks, such as CN 106/115, the Internet 110 and/or the other networks 112.
[00147] Base station 114a may be part of RAN 104/113, which may also include other base stations and/or network elements (not shown), such as a base station controller ("BSC"), a radio network controller ("RNC"), relay nodes, etc. Base station 114a and/or base station 114b can be configured to transmit and/or receive wireless signals on one or more carrier frequencies, which can be called a cell (not shown). These frequencies can be in licensed spectrum, unlicensed spectrum or a combination of licensed and unlicensed spectrum. A cell can provide wireless coverage for a specific geographic area that can be relatively fixed or that can change over time. The cell can also be divided into cell sectors. For example, the cell associated with base station 114a can be divided into three sectors. Thus, in one embodiment, base station 114a can include three transceivers, that is, one for each cell sector. In one embodiment, base station 114a can employ multiple-input multiple-output ("MIMO") technology and can use multiple transceivers for each sector of the cell. For example, beam formation can be used to transmit and/or receive signals in desired spatial directions.
[00148] Base stations 114a, 114b can communicate with one or more of the WTRUs 102a, 102b, 102c, 102d via an air interface 116, which can be any suitable wireless communication link (for example, radio frequency ("RF"), microwave, centimeter wave, micrometric wave, infrared ("IR"), ultraviolet ("UV"), visible light, etc.). The air interface 116 can be established using any suitable radio access technology ("RAT").
[00149] More specifically, as indicated above, communications system 100 may be a multiple access system and may employ one or more channel access schemes, such as CDMA, TDMA, FDMA, OFDMA, SC-FDMA and the like. For example, base station 114a in RAN 104/113 and WTRUs 102a, 102b, 102c can implement a radio technology, such as universal mobile telecommunications system ("UMTS") universal terrestrial radio access ("UTRA"), which can establish the air interface 115/116/117 using wideband CDMA ("WCDMA"). WCDMA can include communication protocols, such as high-speed packet access ("HSPA") and/or evolved HSPA (HSPA+). HSPA may include high-speed downlink ("DL") packet access ("HSDPA") and/or high-speed uplink ("UL") packet access ("HSUPA").
[00150] In one embodiment, base station 114a and WTRUs 102a, 102b, 102c can implement a radio technology, such as evolved UMTS terrestrial radio access (E-UTRA), which can establish the air interface 116 using long term evolution ("LTE") and/or LTE-Advanced ("LTE-A") and/or LTE-Advanced Pro ("LTE-A Pro").
[00151] In one embodiment, base station 114a and WTRUs 102a, 102b, 102c can implement a radio technology, such as NR radio access, which can establish the air interface 116 using New Radio (NR).
[00152] In one embodiment, base station 114a and WTRUs 102a, 102b, 102c can implement multiple radio access technologies. For example, base station 114a and WTRUs 102a, 102b, 102c can implement LTE radio access and NR radio access together, for example, using dual connectivity ("DC") principles. Thus, the air interface used by WTRUs 102a, 102b, 102c can be characterized by multiple types of radio access technologies and/or transmissions sent to/from multiple types of base stations (for example, an eNB and a gNB).
[00153] In other modalities, base station 114a and WTRUs 102a, 102b, 102c can implement radio technologies such as IEEE 802.11 (that is, wireless fidelity ("WiFi")), IEEE 802.16 (that is, worldwide interoperability for microwave access ("WiMAX")), CDMA2000, CDMA2000 1X, CDMA2000 EV-DO, interim standard 2000 (IS-2000), interim standard 95 (IS-95), interim standard 856 (IS-856), global system for mobile communications ("GSM"), enhanced data rates for GSM evolution ("EDGE"), GSM EDGE ("GERAN") and the like.
[00154] Base station 114b in Figure 28A can be, for example, a wireless router, a Home Node B, a Home eNodeB or an access point, and can use any suitable RAT to facilitate wireless connectivity in a localized area, such as a business establishment, a home, a vehicle, a campus, an industrial facility, an air corridor (for example, for use by drones), a highway and the like.
[00155] RAN 104/113 can be in communication with CN 106/115, which can be any type of network configured to provide voice, data, applications and/or voice over Internet protocol ("VoIP") services to one or more of the WTRUs 102a, 102b, 102c, 102d. The data can have varying quality of service ("QoS") requirements, such as different processing capacity requirements, latency requirements, error tolerance requirements, reliability requirements, data processing capacity requirements, mobility requirements and the like. CN 106/115 can provide call control, billing services, location-based mobile services, prepaid calling, Internet connectivity, video distribution, etc., and/or perform high-level security functions, such as user authentication. Although not shown in Figure 28A, it should be considered that RAN 104/113 and/or CN 106/115 can be in direct or indirect communication with other RANs that employ the same RAT as RAN 104/113 or a different RAT.
[00156] CN 106/115 can also serve as a gateway for WTRUs 102a, 102b, 102c, 102d to access PSTN 108, the Internet 110 and/or the other networks 112. PSTN 108 may include circuit-switched telephone networks that provide plain old telephone service ("POTS").
The Internet 110 can include a global system of interconnected computer networks and devices that use common communication protocols, such as the transmission control protocol ("TCP"), the user datagram protocol ("UDP") and the Internet protocol ("IP") of the TCP/IP Internet protocol suite. The networks 112 may include wired and/or wireless communications networks owned and/or operated by other service providers. For example, the networks 112 may include another CN connected to one or more RANs, which may employ the same RAT as RAN 104/113 or a different RAT.
[00157] Some or all of the WTRUs 102a, 102b, 102c, 102d in communications system 100 may include multimode capabilities (for example, WTRUs 102a, 102b, 102c, 102d may include multiple transceivers for communication with different wireless networks through different wireless links). For example, the WTRU 102c shown in Figure 28A can be configured to communicate with base station 114a, which can employ a cellular-based radio technology, and with base station 114b, which can employ an IEEE 802 radio technology.
[00158] Figure 28B is a system diagram illustrating an example of a WTRU 102. As shown in Figure 28B, WTRU 102 can include a processor 118, a transceiver 120, a transmit/receive element 122, a speaker/microphone 124, a numeric keypad 126, a monitor/touchpad 128, a non-removable memory 130, a removable memory 132, a power source 134, a global positioning system ("GPS") chipset 136 and/or other peripherals 138, among others. It will be recognized that WTRU 102 may include any subcombination of the above elements while remaining consistent with a modality.
[00159] Processor 118 may be a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor ("DSP"), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, application specific integrated circuits ("ASICs"), field programmable gate array ("FPGA") circuits, any other type of integrated circuit (IC), a state machine and the like. Processor 118 can perform signal encoding, data processing, power control, input/output processing and/or any other functionality that enables WTRU 102 to operate in a wireless environment. Processor 118 can be coupled to transceiver 120, which can be coupled to the transmit/receive element 122. Although Figure 28B represents processor 118 and transceiver 120 as separate components, it will be recognized that processor 118 and transceiver 120 can be integrated together in an electronic package or electronic circuit.
[00160] The transmit/receive element 122 can be configured to transmit signals to, and/or receive signals from, a base station (for example, base station 114a) via the air interface 116.
[00161] Although the transmit/receive element 122 is represented in Figure 28B as a single element, WTRU 102 can include any number of transmit/receive elements 122. More specifically, WTRU 102 can employ MIMO technology. Thus, in one embodiment, WTRU 102 may include two or more transmit/receive elements 122 (for example, multiple antennas) to transmit and receive wireless signals via the air interface 116.
[00162] Transceiver 120 can be configured to modulate the signals that are to be transmitted by the transmit/receive element 122, and to demodulate the signals that are received by the transmit/receive element 122. As indicated above, WTRU 102 may have multimode capabilities.
In this way, transceiver 120 can include multiple transceivers to enable WTRU 102 to communicate through multiple RATs, such as NR and IEEE 802.11, for example.
[00163] Processor 118 of WTRU 102 can be coupled to the speaker/microphone 124, the numeric keypad 126 and/or the monitor/touchpad 128 (for example, a liquid crystal display ("LCD") unit or an organic light-emitting diode ("OLED") display unit). Processor 118 can also send user data to, and receive user data from, the speaker/microphone 124, the numeric keypad 126 and/or the monitor/touchpad 128.
[00164] Processor 118 can receive power from power source 134, and can be configured to distribute and/or control power to the other components in WTRU 102. Power source 134 can be any device suitable for powering WTRU 102. For example, power source 134 may include one or more dry cell batteries (for example, nickel-cadmium (NiCd), nickel-zinc (NiZn), nickel-metal hydride (NiMH), lithium ion (Li-ion), etc.), solar cells, fuel cells and the like.
[00165] Processor 118 can also be coupled to the GPS chipset 136, which can be configured to provide location information (for example, longitude and latitude) regarding the current location of WTRU 102. In addition to, or instead of, the information from the GPS chipset 136, WTRU 102 can receive location information via the air interface 116 from a base station (for example, base stations 114a, 114b) and/or determine its location based on the timing of the signals received from two or more nearby base stations. It will be recognized that WTRU 102 can acquire location information by means of any suitable location determination method while remaining consistent with a modality.
[00166] Processor 118 may also be coupled to other peripherals 138, which may include one or more software and/or hardware modules that provide additional wireless or wired features, functionality and/or connectivity. For example, peripherals 138 may include an accelerometer, an electronic compass, a satellite transceiver, a digital camera (for photos and/or video), a universal serial bus ("USB") port, a vibration device, a television transceiver, a hands-free headset, a Bluetooth® module, a frequency modulated ("FM") radio unit, a digital music player, a media player, a video game player module, an Internet browser, a virtual reality and/or augmented reality ("VR/AR") device, an activity tracker and the like. Peripherals 138 may include one or more sensors; the sensors may be one or more of a gyroscope, an accelerometer, a hall effect sensor, a magnetometer, an orientation sensor, a proximity sensor, a temperature sensor, a time sensor, a geolocation sensor, an altimeter, a light sensor, a touch sensor, a barometer, a gesture sensor, a biometric sensor and/or a humidity sensor.
[00167] The WTRU 102 may include a full duplex radio for which the transmission and reception of some or all of the signals (for example, associated with specific subframes for the UL (for example, for transmission) and the downlink (for example, for reception)) can be concurrent and/or simultaneous. The full duplex radio can include an interference management unit to reduce and/or substantially eliminate self-interference by means of hardware (for example, a choke) or signal processing by means of a processor (for example, a separate processor (not shown) or processor 118).
[00168] Figure 28C is a system diagram that illustrates RAN 104 and CN 106 according to a modality. As noted above, RAN 104 can employ E-UTRA radio technology to communicate with WTRUs 102a, 102b, 102c via the air interface 116. RAN 104 can also be in communication with CN 106.
[00169] RAN 104 can include eNodeBs 160a, 160b, 160c, although it should be considered that RAN 104 can include any number of eNodeBs and still remain consistent with a modality.
Each of the eNodeBs 160a, 160b, 160c can include one or more transceivers for communication with WTRUs 102a, 102b, 102c via the air interface 116.
[00170] Each of the eNodeBs 160a, 160b, 160c can be associated with a specific cell (not shown) and can be configured to handle radio resource management decisions, handover decisions, scheduling of users in the UL and/or DL, and the like. As shown in Figure 28C, the eNodeBs 160a, 160b, 160c can communicate with each other via an X2 interface.
[00171] CN 106 shown in Figure 28C can include a mobility management entity ("MME") 162, a serving gateway ("SGW") 164 and a packet data network ("PDN") gateway ("PGW") 166. Although each of the above elements is represented as part of CN 106, it should be considered that any of these elements may belong to and/or be operated by an entity other than the CN operator.
[00172] MME 162 can be connected to each of the eNodeBs 160a, 160b, 160c in RAN 104 through an S1 interface and can serve as a control node. For example, MME 162 may be responsible for authenticating users of WTRUs 102a, 102b, 102c, for bearer activation/deactivation, for selecting a specific serving gateway during an initial attachment of WTRUs 102a, 102b, 102c, and the like. MME 162 can provide a control plane function for switching between RAN 104 and other RANs (not shown) that employ other radio technologies, such as GSM or WCDMA.
[00173] SGW 164 can be connected to each of the eNodeBs 160a, 160b, 160c in RAN 104 through the S1 interface. SGW 164 can, in general, route and forward user data packets to/from WTRUs 102a, 102b, 102c. SGW 164 can perform other functions, such as anchoring user planes during inter-eNodeB handovers, triggering paging when DL data is available for WTRUs 102a, 102b, 102c, managing and storing the contexts of WTRUs 102a, 102b, 102c, and the like.
[00174] SGW 164 can be connected to PGW 166, which can provide WTRUs 102a, 102b, 102c with access to packet-switched networks, such as the Internet 110, to facilitate communications between WTRUs 102a, 102b, 102c and IP-enabled devices.
[00175] CN 106 can facilitate communication with other networks. For example, CN 106 can include, or can communicate with, an IP gateway (for example, an IP multimedia subsystem ("IMS") server) that serves as an interface between CN 106 and PSTN 108. In addition, CN 106 can provide WTRUs 102a, 102b, 102c with access to the other networks 112, which can include other wired and/or wireless networks belonging to and/or operated by other service providers.
[00176] Although the WTRU is described in Figures 28A to 28D as a wireless terminal, it is contemplated that, in certain representative embodiments, such a terminal may use (for example, temporarily or permanently) wired communication interfaces with the communication network.
[00177] In representative modalities, the other network 112 can be a WLAN.
[00178] A WLAN in infrastructure basic service set ("BSS") mode can have an access point ("AP") for the BSS and one or more stations ("STAs") associated with the AP. The AP can have access to, or an interface with, a distribution system ("DS") or another type of wired/wireless network that carries traffic into and/or out of the BSS. Traffic to STAs that originates outside the BSS can arrive through the AP and can be delivered to the STAs. Traffic from STAs to destinations outside the BSS can be sent to the AP to be delivered to the respective destinations. Traffic between STAs within the BSS can be sent through the AP, for example, where the originating STA can send traffic to the AP and the AP can deliver the traffic to the destination STA. Traffic between STAs within a BSS can be considered point-to-point traffic, which can be sent between (for example, directly between) the source STA and the destination STA, for example with a direct link setup ("DLS").
[00179] When using the 802.11ac infrastructure mode of operation or a similar mode of operation, the AP can transmit a beacon on a fixed channel, such as a primary channel. The primary channel can have a fixed width (for example, 20 MHz of bandwidth) or a width defined dynamically through signaling.
The primary channel can be the operating channel of the BSS and can be used by the STAs to establish a connection with the AP. In certain representative modalities, carrier sense multiple access with collision avoidance ("CSMA/CA") can be implemented, for example, in 802.11 systems. For CSMA/CA, the STAs (for example, each STA), including the AP, can sense the primary channel. If the primary channel is sensed and/or determined to be occupied by a particular STA, that STA can back off. One STA (for example, a single station) can transmit at any given moment in a given BSS.
[00180] High throughput ("HT") STAs can use a 40 MHz wide channel for communication, for example, by combining the primary 20 MHz channel with an adjacent or non-adjacent 20 MHz channel to form a 40 MHz wide channel.
[00181] Very high throughput ("VHT") STAs can support channels that are 20 MHz, 40 MHz, 80 MHz and/or 160 MHz wide. The 40 MHz and/or 80 MHz channels can be formed, for example, by combining contiguous 20 MHz channels. A 160 MHz channel can be formed, for example, by combining eight contiguous 20 MHz channels or by combining two non-contiguous 80 MHz channels, which can be called an 80+80 configuration. For the 80+80 configuration, the data, after channel encoding, can be passed through a segment parser that can divide the data into two streams. Inverse fast Fourier transform ("IFFT") processing and time domain processing can be performed, for example, on each stream separately. The streams can be mapped onto the two 80 MHz channels, and the data can be transmitted by a transmitting STA. At the receiving STA, the operation described above for the 80+80 configuration can be reversed, and the combined data can be sent to the medium access control ("MAC") layer.
[00182] Sub-1 GHz operating modes are supported by 802.11af and 802.11ah. The channel operating bandwidths and carriers are reduced in 802.11af and 802.11ah in relation to those used in 802.11n and 802.11ac.
[00183] WLAN systems that can support multiple channels and channel bandwidths, such as 802.11n, 802.11ac, 802.11af and 802.11ah, include a channel that can be designated as the primary channel. The primary channel can, for example, have a bandwidth equal to the largest common operating bandwidth supported by all STAs in the BSS. The bandwidth of the primary channel can be defined and/or limited by the STA, among all STAs operating in the BSS, that supports the lowest bandwidth operating mode. In the 802.11ah example, the primary channel can be 1 MHz wide for STAs (for example, MTC type devices) that support (for example, only support) a 1 MHz mode, even if the AP and other STAs in the BSS support operating modes with 2 MHz, 4 MHz, 8 MHz, 16 MHz and/or other channel bandwidths. The carrier sensing and/or network allocation vector ("NAV") settings may depend on the state of the primary channel. If the primary channel is busy, for example, because a STA (which supports only a 1 MHz operating mode) is transmitting to the AP, all the available frequency bands can be considered busy even if most of them remain idle and available.
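As a toy illustration only (a minimal Python sketch; the names and structure are assumptions of this sketch, not the 802.11 carrier sensing procedure itself), the dependence of the wider operating bandwidths on the state of the primary channel described above can be pictured as follows:

    def available_bands(primary_busy, bandwidths_mhz=(1, 2, 4, 8, 16)):
        """Toy model of the behavior described above: when the narrow
        primary channel is sensed busy (for example, a 1 MHz-only STA is
        transmitting to the AP), the wider operating bandwidths that
        contain it are all treated as busy, even if otherwise idle."""
        state = "busy" if primary_busy else "idle"
        return {bw: state for bw in bandwidths_mhz}

    # A 1 MHz transmission on the primary channel makes the 2/4/8/16 MHz
    # modes unavailable as well.
    print(available_bands(primary_busy=True))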
[00184] In the United States, the available frequency bands that can be used by 802.11ah are from 902 MHz to 928 MHz. In Korea, the available frequency bands are from 917.5 MHz to 923.5 MHz. In Japan, the available frequency bands are from 916.5 MHz to 927.5 MHz. The total bandwidth available for 802.11ah is from 6 MHz to 26 MHz, depending on the country code.
[00185] Figure 28D is a system diagram that illustrates RAN 113 and CN 115 according to a modality. As noted above, RAN 113 can employ NR radio technology to communicate with WTRUs 102a, 102b, 102c via the air interface 116. RAN 113 can also be in communication with CN 115.
[00186] RAN 113 can include gNBs 180a, 180b, 180c, although it should be considered that RAN 113 can include any number of gNBs and still remain consistent with a modality. The gNBs 180a, 180b, 180c can include one or more transceivers for communication with WTRUs 102a, 102b, 102c through the air interface 116. In some modalities, the gNBs 180a, 180b, 180c can implement MIMO technology. For example, gNBs 180a, 180b can use beam formation to transmit signals to, and/or receive signals from, the WTRUs 102a, 102b, 102c. Thus, the gNB 180a, for example, can use multiple antennas to transmit wireless signals to, and/or receive wireless signals from, WTRU 102a. In one embodiment, the gNBs 180a, 180b and 180c can implement carrier aggregation technology. For example, the gNB 180a can transmit multiple component carriers to WTRU 102a (not shown). A subset of these component carriers may be in unlicensed spectrum while the remaining component carriers may be in licensed spectrum. In one embodiment, the gNBs 180a, 180b and 180c can implement coordinated multi-point ("CoMP") technology. For example, WTRU 102a can receive coordinated transmissions from gNB 180a and gNB 180b (and/or gNB 180c).
[00187] WTRUs 102a, 102b, 102c can communicate with gNBs 180a, 180b, 180c using transmissions associated with a scalable numerology. For example, the OFDM symbol spacing and/or OFDM subcarrier spacing may vary for different transmissions, different cells and/or different portions of the wireless transmission spectrum.
[00188] The gNBs 180a, 180b and 180c can be configured to communicate with WTRUs 102a, 102b, 102c in a standalone configuration and/or a non-standalone configuration. In the standalone configuration, WTRUs 102a, 102b, 102c can communicate with gNBs 180a, 180b, 180c without also accessing other RANs (for example, eNode-Bs 160a, 160b, 160c). In the standalone configuration, WTRUs 102a, 102b, 102c can use one or more of the gNBs 180a, 180b, 180c as a mobility anchor point. In the standalone configuration, WTRUs 102a, 102b, 102c can communicate with gNBs 180a, 180b, 180c using signals in an unlicensed band. In a non-standalone configuration, WTRUs 102a, 102b, 102c can communicate with/connect to gNBs 180a, 180b, 180c while also communicating with/connecting to another RAN, such as eNode-Bs 160a, 160b, 160c. For example, WTRUs 102a, 102b, 102c can implement DC principles to communicate with one or more gNBs 180a, 180b, 180c and one or more eNode-Bs 160a, 160b, 160c substantially simultaneously. In the non-standalone configuration, the eNode-Bs 160a, 160b, 160c can serve as a mobility anchor for WTRUs 102a, 102b, 102c, and the gNBs 180a, 180b, 180c can provide additional coverage and/or processing capacity for serving WTRUs 102a, 102b, 102c.
[00189] Each of the gNBs 180a, 180b, 180c can be associated with a particular cell (not shown) and can be configured to handle radio resource management decisions, handover decisions, scheduling of users in the UL and/or DL, network slicing support, dual connectivity, interworking between NR and E-UTRA, routing of user plane data to the UPF 184a, 184b, routing of control plane information to the AMF 182a, 182b, and the like.
[00190] CN 115 shown in Figure 28D can include at least one AMF 182a, 182b, at least one UPF 184a, 184b, at least one session management function ("SMF") 183a, 183b and possibly a data network ("DN") 185a, 185b. Although each of the aforementioned elements is shown as part of CN 115, it should be considered that any of these elements may belong to and/or be operated by an entity other than the operator of the CN.
[00191] AMF 182a, 182b can be connected to each of the gNBs 180a, 180b, 180c in RAN 113 via an N2 interface and can serve as a control node. For example, AMF 182a, 182b may be responsible for authenticating users of WTRUs 102a, 102b, 102c, support for network slicing (for example, handling different protocol data unit ("PDU") sessions with different requirements), selection of a specific SMF 183a, 183b, management of the registration area, termination of non-access stratum ("NAS") signaling, mobility management and the like. Network slicing can be used by AMF 182a, 182b to customize the CN support for WTRUs 102a, 102b, 102c based on the types of services that are used by WTRUs 102a, 102b, 102c. For example, different network slices can be established for different use cases, such as services based on ultra-reliable low latency communications ("URLLC") access, services based on enhanced mobile broadband ("eMBB") access, services for machine type communications ("MTC") access and/or the like.
[00192] SMF 183a, 183b can be connected to an AMF 182a, 182b in CN 115 via an N11 interface. SMF 183a, 183b can also be connected to a UPF 184a, 184b in CN 115 via an N4 interface. SMF 183a, 183b can select and control UPF 184a, 184b and configure traffic routing through UPF 184a, 184b. SMF 183a, 183b can perform other functions, such as managing and allocating the UE IP address, managing PDU sessions, controlling the application of policies and QoS, providing notifications of downlink data, and the like. A PDU session type can be IP-based, non-IP-based, Ethernet-based and the like.
[00193] UPF 184a, 184b can be connected to one or more of the gNBs 180a, 180b, 180c in RAN 113 through an N3 interface, which can provide WTRUs 102a, 102b, 102c with access to packet-switched networks, such as the Internet 110, to facilitate communications between WTRUs 102a, 102b, 102c and IP-enabled devices. UPF 184a, 184b can perform other functions, such as packet routing and forwarding, application of user plane policies, support for multi-homed PDU sessions, handling of user plane QoS, temporary storage of downlink packets, provision of mobility anchoring, and the like.
[00194] CN 115 can facilitate communication with other networks. For example, CN 115 can include, or can communicate with, an IP gateway (for example, an IP multimedia subsystem ("IMS") server) that serves as an interface between CN 115 and PSTN 108.
In addition, CN 115 can provide WTRUs 102a, 102b, 102c with access to the other networks 112, which can include other wired and/or wireless networks belonging to and/or operated by other service providers.
[00195] In view of Figures 28A to 28D and the corresponding description of Figures 28A to 28D, one or more, or all, of the functions described here in relation to one or more of: WTRU 102a-d, base station 114a-b, eNode-B 160a-c, MME 162, SGW 164, PGW 166, gNB 180a-c, AMF 182a-b, UPF 184a-b, SMF 183a-b, DN 185a-b and/or any other devices described here may be performed by one or more emulation devices (not shown). The emulation devices can be one or more devices configured to emulate one or more, or all, of the functions described here. For example, the emulation devices can be used to test other devices and/or to simulate network and/or WTRU functions.
[00196] The emulation devices can be designed to implement one or more tests of other devices in a laboratory environment and/or in an operator network environment. For example, the one or more emulation devices can perform one or more, or all, of the functions while being fully or partially implemented/deployed as part of a wired and/or wireless communication network in order to test other devices within the communication network. The one or more emulation devices can perform one or more, or all, of the functions while being temporarily implemented/deployed as part of a wired and/or wireless communication network. The emulation device can be directly coupled to another device for testing purposes and/or can perform tests using over-the-air wireless communications.
[00197] The one or more emulation devices can perform one or more, including all, of the functions while not being implemented/deployed as part of a wired and/or wireless communication network. For example, the emulation devices can be used in a test scenario in a test laboratory and/or in a non-deployed wireless or wired communication network (for example, for testing) in order to implement the testing of one or more components. The one or more emulation devices can be test equipment. Direct RF coupling and/or wireless communications via RF circuits (for example, which may include one or more antennas) can be used by the emulation devices to transmit and/or receive data.
[00198] Although the features and elements are described above in specific combinations, a person skilled in the art will appreciate that each feature or element can be used alone or in any combination with the other features and elements. In addition, the methods described here can be implemented in a computer program, software or firmware embedded in a computer-readable medium for execution by a computer or processor. Examples of computer-readable media include electronic signals (transmitted via wired and/or wireless connections) and computer-readable storage media. Examples of computer-readable storage media include, but are not limited to, read-only memory (ROM), random access memory (RAM), a register, cache memory, semiconductor memory devices, magnetic media such as internal hard drives and removable disks, magneto-optical media and optical media such as CD-ROM discs and/or digital versatile discs (DVDs). A processor in association with software can be used to implement a radio frequency transceiver for use in a WTRU, UE, terminal, base station, RNC and/or any host computer.
Claims (15)
1. Video data encoding device, characterized by the fact that it comprises: a processor, the processor being configured to, at least: identify forecast information for a current encoding unit, the forecast information comprising a first forecast signal associated with a first reference block and a second forecast signal associated with a second reference block; calculate a forecast difference between the first forecast signal and the second forecast signal; determine whether the first forecast signal and the second forecast signal are different based on the calculated forecast difference; and reconstruct the current encoding unit based on the determination of whether the first forecast signal and the second forecast signal are different, wherein bidirectional optical flow (BIO) is enabled for the current encoding unit based on the determination that the first forecast signal and the second forecast signal are different, and the BIO is disabled for the current encoding unit based on the determination that the first forecast signal and the second forecast signal are not different.
2. Device according to claim 1, characterized by the fact that the forecast difference between the first forecast signal and the second forecast signal comprises an average difference between a first set of sample values associated with the first reference block and a second set of sample values associated with the second reference block.
3. Device according to claim 2, characterized by the fact that the processor is additionally configured to: interpolate the first set of sample values from the first reference block; and interpolate the second set of sample values from the second reference block.
4. Device according to claim 1, characterized by the fact that the first forecast signal comprises a first motion vector that associates the current encoding unit with the first reference block, the second forecast signal comprises a second motion vector that associates the current encoding unit with the second reference block, and the calculation of the forecast difference between the first forecast signal and the second forecast signal comprises calculating a motion vector difference between the first motion vector from the first forecast signal and the second motion vector from the second forecast signal.
5. Device according to claim 4, characterized by the fact that the first motion vector is associated with a first reference image at a first time distance from the current encoding unit, the second motion vector is associated with a second reference image at a second time distance from the current encoding unit, and the processor is additionally configured to: scale the first motion vector associated with the first reference image based on the first time distance; and scale the second motion vector associated with the second reference image based on the second time distance, the motion vector difference being calculated using the first scaled motion vector and the second scaled motion vector.
6. Device according to claim 1, characterized by the fact that the processor is configured to compare the calculated forecast difference to a BIO enabling limit, the first forecast signal and the second forecast signal being determined to be different when the forecast difference is greater than the BIO enabling limit.
7.
Device according to claim 6, characterized by the fact that the processor is additionally configured to determine the BIO enabling limit based on one or more of a desired level of complexity or a desired coding efficiency.
8. Device according to claim 6, characterized by the fact that the processor is additionally configured to receive the BIO enabling limit through signaling.
9. Device according to claim 1, characterized by the fact that, when the BIO is enabled for the current encoding unit, the processor is additionally configured to: determine that the current encoding unit is encoded with sub-block encoding enabled; identify forecast information for a current sub-block within the current encoding unit, the forecast information for the current sub-block comprising a third forecast signal associated with a first reference sub-block and a fourth forecast signal associated with a second reference sub-block; calculate a forecast difference between the third forecast signal and the fourth forecast signal; determine whether the third forecast signal and the fourth forecast signal are different based on the calculated forecast difference; and reconstruct the current sub-block based on the determination of whether the third forecast signal and the fourth forecast signal are different, wherein the BIO is enabled for the current sub-block based on the determination that the third forecast signal and the fourth forecast signal are different, and the BIO is disabled for the current sub-block based on the determination that the third forecast signal and the fourth forecast signal are not different.
10. Method of encoding video data, characterized by the fact that it comprises: identifying forecast information for a current encoding unit, the forecast information comprising a first forecast signal associated with a first reference block and a second forecast signal associated with a second reference block; calculating a forecast difference between the first forecast signal and the second forecast signal; determining whether the first forecast signal and the second forecast signal are different based on the calculated forecast difference; and reconstructing the current encoding unit based on the determination of whether the first forecast signal and the second forecast signal are different, wherein bidirectional optical flow (BIO) is enabled for the current encoding unit based on the determination that the first forecast signal and the second forecast signal are different, and the BIO is disabled for the current encoding unit based on the determination that the first forecast signal and the second forecast signal are not different.
11. Method according to claim 10, characterized by the fact that the forecast difference between the first forecast signal and the second forecast signal comprises an average difference between a first set of sample values associated with the first reference block and a second set of sample values associated with the second reference block.
12. Method according to claim 11, characterized by the fact that the first set of sample values is interpolated from the first reference block and the second set of sample values is interpolated from the second reference block.
13.
Method according to claim 10, characterized by the fact that the first forecast signal comprises a first motion vector that associates the current encoding unit with the first reference block, the second forecast signal comprises a second motion vector that associates the current encoding unit with the second reference block, and the calculation of the forecast difference between the first forecast signal and the second forecast signal comprises calculating a motion vector difference between the first motion vector from the first forecast signal and the second motion vector from the second forecast signal.
14. Method according to claim 13, characterized by the fact that the first motion vector is associated with a first reference image at a first time distance from the current encoding unit and the second motion vector is associated with a second reference image at a second time distance from the current encoding unit, the motion vector difference being calculated using a first motion vector scaled based on the first time distance and a second motion vector scaled based on the second time distance.
15. Method according to claim 10, characterized by the fact that it additionally comprises comparing the calculated forecast difference to a BIO enabling limit, the first forecast signal and the second forecast signal being determined to be different when the forecast difference is greater than the BIO enabling limit.